
    A new perspective on the Propagation-Separation approach: Taking advantage of the propagation condition

    The Propagation-Separation approach is an iterative procedure for the pointwise estimation of local constant and local polynomial functions. The estimator is defined as a weighted mean of the observations with data-driven weights. Within homogeneous regions it behaves similarly to non-adaptive smoothing (propagation), while avoiding smoothing across distinct regions (separation). To enable a proof of stability of the estimates, the authors of the original study introduced an additional memory step aggregating the estimators of successive iteration steps. Here, we study theoretical properties of the simplified algorithm in which the memory step is omitted. In particular, we introduce a new strategy for the choice of the adaptation parameter, yielding propagation and stability for local constant functions with sharp discontinuities. Comment: 28 pages, 5 figures
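    The simplified iteration (without the memory step) can be sketched for the local constant case. The kernel choices, bandwidth schedule, and value of the adaptation parameter `lam` below are illustrative placeholders, not the paper's exact specification:

    ```python
    import numpy as np

    def propagation_separation(y, sigma2, n_iter=10, lam=3.0):
        """Simplified Propagation-Separation iteration (memory step omitted)
        for a local constant function on a 1-d grid. Kernels, the bandwidth
        schedule, and `lam` (the adaptation parameter) are illustrative."""
        n = len(y)
        x = np.arange(n, dtype=float)
        theta = y.astype(float)            # initial pointwise estimates
        h = 1.0                            # initial bandwidth
        for _ in range(n_iter):
            # location kernel: triangular, neighbourhood grows each step
            k_loc = np.clip(1.0 - np.abs(x[:, None] - x[None, :]) / h, 0.0, 1.0)
            # statistical penalty: large estimate differences kill the weight,
            # which prevents smoothing across a discontinuity (separation)
            pen = (theta[:, None] - theta[None, :]) ** 2 / (sigma2 * lam)
            k_stat = np.clip(1.0 - pen, 0.0, 1.0)
            w = k_loc * k_stat
            theta = (w @ y) / w.sum(axis=1)  # weighted means (propagation)
            h *= 1.25                        # enlarge the neighbourhoods
        return theta

    # piecewise-constant signal with a sharp discontinuity plus noise
    rng = np.random.default_rng(0)
    truth = np.r_[np.zeros(50), np.ones(50)]
    y = truth + 0.1 * rng.standard_normal(100)
    est = propagation_separation(y, sigma2=0.01)
    ```

    Note that the diagonal weight `w[i, i]` is always 1, so the weighted mean is well defined at every point even when all neighbours are rejected by the penalty.
    
    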

    Lepskii Principle in Supervised Learning

    In the setting of supervised learning using reproducing kernel methods, we propose a data-dependent regularization parameter selection rule that is adaptive to the unknown regularity of the target function and is optimal both for the least-squares (prediction) error and for the reproducing kernel Hilbert space (reconstruction) norm error. It is based on a modified Lepskii balancing principle using a varying family of norms.
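    A generic Lepskii-type balancing rule can be sketched as follows. The constant `c`, the plain Euclidean norm, and the toy estimator sequence are placeholders; the paper's varying family of norms and its kernel-method setting are not reproduced here:

    ```python
    import numpy as np

    def lepskii_select(estimators, noise_bounds, c=2.0):
        """Lepskii-type balancing principle (generic sketch).

        `estimators` are ordered by decreasing regularization, so the known
        variance bounds `noise_bounds` increase along the list while the
        unknown bias decreases. Pick the largest index j whose estimator
        stays within c * noise_bounds[i] of every earlier (more
        regularized) estimator i < j."""
        best = 0
        for j in range(len(estimators)):
            if all(np.linalg.norm(estimators[j] - estimators[i])
                   <= c * noise_bounds[i] for i in range(j)):
                best = j
            else:
                break
        return best

    # toy 1-d "estimators": bias shrinks, noise bound grows along the grid
    ests = [np.array([0.5]), np.array([0.8]), np.array([0.95]), np.array([2.0])]
    bounds = [0.2, 0.3, 0.5, 1.0]
    j_star = lepskii_select(ests, bounds)   # balances bias against noise
    ```

    The rule never needs the unknown bias: it stops as soon as two estimators differ by more than their noise bounds can explain, which is when bias starts to dominate.
    
    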

    Relaxation of Product Markov Chains on Product Spaces

    The purpose of this paper is to study the relaxation time of product-type Markov chains on product spaces that approach a product distribution. We derive bounds on the time to approach stationarity for such Markov chains in terms of the mixing times of the component Markov chains. In cases where the component mixing times vary widely, we propose an optimized visiting scheme which makes such product-type Markov chains comparable to Gibbs-type samplers. We conclude the paper with a discussion of the relaxation of Metropolis-type samplers applied to separable energy functions.
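    The effect of a non-uniform visiting scheme on the spectral gap of a product-type chain can be illustrated on two two-state components. The transition matrices and visiting probabilities below are toy choices, an illustrative reading of the abstract rather than the paper's construction:

    ```python
    import numpy as np

    def product_chain(P_list, probs):
        """Transition matrix of a product-type chain that, with probability
        probs[k], applies component kernel P_list[k] to coordinate k and
        leaves the other coordinates fixed (Kronecker assembly)."""
        dims = [P.shape[0] for P in P_list]
        T = np.zeros((int(np.prod(dims)),) * 2)
        for k, Pk in enumerate(P_list):
            M = np.eye(1)
            for j, d in enumerate(dims):
                M = np.kron(M, Pk if j == k else np.eye(d))
            T += probs[k] * M
        return T

    def spectral_gap(T):
        """1 minus the second-largest eigenvalue (T is symmetric here)."""
        return 1.0 - np.sort(np.linalg.eigvalsh(T))[-2]

    P_slow = np.array([[0.9, 0.1], [0.1, 0.9]])   # slowly mixing component
    P_fast = np.array([[0.5, 0.5], [0.5, 0.5]])   # fast component
    gap_uniform = spectral_gap(product_chain([P_slow, P_fast], [0.5, 0.5]))
    gap_skewed = spectral_gap(product_chain([P_slow, P_fast], [0.8, 0.2]))
    # visiting the slow component more often enlarges the spectral gap
    ```

    Here the eigenvalues of the product chain are the visiting-probability averages of the component eigenvalues, so the slow component dominates the gap unless it is visited more often.
    
    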

    Degree of ill-posedness of statistical inverse problems

    We introduce the notion of the degree of ill-posedness of linear operators in operator equations between Hilbert spaces. Under specific assumptions on the noise, this quantity can be computed explicitly. We then show that the degree of ill-posedness so introduced explains the loss of accuracy when solving inverse problems in Hilbert spaces in a variety of instances.
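    A classical illustration of this kind of notion (not the paper's noise-dependent version) is the decay rate of an operator's singular values: for the integration operator on [0, 1] they decay like 1/k, the standard example of a mildly ill-posed problem of degree one:

    ```python
    import numpy as np

    # Discretize the integration operator (Af)(x) = ∫_0^x f(t) dt on [0, 1]
    # by a lower-triangular quadrature matrix; its singular values decay
    # like 1/k -- the classical example of degree of ill-posedness one.
    # (The paper's notion additionally depends on the noise model.)
    n = 200
    A = np.tril(np.ones((n, n))) / n
    s = np.linalg.svd(A, compute_uv=False)      # sorted in decreasing order

    # continuous-operator prediction: sigma_k ≈ 2 / ((2k - 1) * pi)
    k = np.arange(1, n + 1)
    predicted = 2.0 / ((2 * k - 1) * np.pi)
    ```

    Faster (e.g. exponential) singular-value decay would correspond to severe ill-posedness, and the attainable reconstruction accuracy degrades accordingly.
    
    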

    Asymptotically optimal weighted numerical integration

    We study the numerical integration of Hölder-type functions with respect to weights on the real line. Our study extends previous work by F. Curbera [2] and relies on a connection between this problem and the approximation of distribution functions by empirical ones. The analysis is based on a lemma which is important within the theory of optimal designs for approximating stochastic processes. As an application, we reproduce a variant of the well-known result for weighted integration of Brownian paths; see, e.g., [8].
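    The connection to approximating distribution functions by empirical ones suggests a simple quantile-based quadrature: place nodes at equispaced quantiles of the weight and use equal weights. The exponential weight and the specific rule below are an illustrative reading of the abstract, not the paper's construction:

    ```python
    import numpy as np

    def quantile_rule(f, quantile, n):
        """Approximate the weighted integral ∫ f dF by averaging f at n
        equispaced quantiles of the weight distribution F -- i.e. by
        integrating f against a uniform-weight empirical approximation
        of F."""
        u = (np.arange(n) + 0.5) / n       # midpoints of the [0, 1] cells
        return float(np.mean(f(quantile(u))))

    # weight: standard exponential on [0, inf), with F^{-1}(u) = -log(1 - u)
    def q_exp(u):
        return -np.log1p(-u)

    approx = quantile_rule(np.cos, q_exp, 1000)
    # compare with the exact value: ∫_0^∞ cos(x) e^{-x} dx = 1/2
    ```

    For Hölder-type integrands, the quality of such a rule is governed by how well the empirical distribution of the nodes approximates F, which is the link exploited in the analysis.
    
    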